


데이터베이스 연구회지 (Database Research Journal, SIGDB)


Korean Title: 중첩 속성을 활용한 랜덤 언더 샘플링 기반의 공정성 개선 기법
English Title: Random Under Sampling-Based Fairness Improvement Technique Using Overlapping Attribute
Authors: Daewon Kang (강대원), Joonho Kwon (권준호), Jonghoon Chun (전종훈)
Citation: Vol. 38, No. 3, pp. 16-34 (Aug. 2022)
English Abstract:
As machine learning models have begun to be used in human-related fields, the fairness of such models is drawing attention. A fairness problem refers to certain groups, defined by sensitive attributes such as gender or race, receiving biased decisions from a machine learning model compared to other groups. As dataset imbalance has been identified as one cause of fairness problems, research on fairness for imbalanced data is being actively conducted. Traditionally, the methods for eliminating bias due to data imbalance are over-sampling and under-sampling techniques. However, these traditional imbalance-mitigation techniques have difficulty improving the fairness of a model. Therefore, this paper proposes a random under-sampling-based fairness improvement technique using an overlapping attribute. The technique finds the overlapping attribute, a general attribute of the dataset that affects both the performance and the fairness of the model. It then creates subgroups based on the distinct values of the overlapping attribute, the label, and the sensitive attribute, and mitigates the data imbalance between subgroups through random under-sampling. The performance of the method was evaluated over multiple datasets and random partitioning schemes, with 10 repeated runs. In addition, improved fairness was confirmed using Equal Opportunity, Equalized Odds, Treatment Equality, and Demographic Parity as fairness metrics.
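The subgroup balancing step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the field names (`edu` for the overlapping attribute, `y` for the label, `sex` for the sensitive attribute) are hypothetical, the overlapping attribute is assumed to already be selected (the paper's procedure for finding it is not reproduced), and only one of the four fairness metrics, Demographic Parity, is shown.

```python
import random
from collections import defaultdict

def random_under_sample(rows, key_attrs, seed=0):
    """Randomly down-sample every subgroup to the size of the smallest one.
    Subgroups are keyed on the tuple of values a row takes on `key_attrs`
    (here: overlapping attribute, label, sensitive attribute)."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in key_attrs)].append(row)
    target = min(len(members) for members in groups.values())
    rng = random.Random(seed)
    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, target))
    return balanced

def demographic_parity_gap(rows, sensitive, pred):
    """|P(pred=1 | s=a) - P(pred=1 | s=b)| for a binary sensitive
    attribute; 0 means perfect demographic parity."""
    rates = []
    for s in sorted({r[sensitive] for r in rows}):
        grp = [r for r in rows if r[sensitive] == s]
        rates.append(sum(r[pred] for r in grp) / len(grp))
    return abs(rates[0] - rates[1])

# Hypothetical toy data: subgroup (edu=0, y=0, sex=0) is over-represented.
rows = []
for edu in (0, 1):
    for y in (0, 1):
        for sex in (0, 1):
            n = 5 if (edu, y, sex) == (0, 0, 0) else 2
            rows.extend({"edu": edu, "y": y, "sex": sex} for _ in range(n))

balanced = random_under_sample(rows, ("edu", "y", "sex"))
# All 8 subgroups now have the same size as the smallest (2 rows each).
```

Balancing all (overlapping attribute, label, sensitive attribute) subgroups to a common size removes the label/sensitive-attribute skew that the imbalance induced, at the cost of discarding rows; the paper evaluates that trade-off with the four fairness metrics it lists.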
Keywords: Fairness, Machine Learning, Data-centric AI, Data Imbalance Mitigation, Bias Mitigation